
    Cyclic Low-Density MDS Array Codes

    We construct two infinite families of low-density MDS array codes that are also cyclic. One of these families includes the first such sub-family with redundancy parameter r > 2. The two constructions have different algebraic formulations, but they share the same indirect structure: first, MDS codes that are not cyclic are constructed, and then, by applying a certain mapping to their parity-check matrices, non-equivalent cyclic codes with the same distance and density properties are obtained. Using the same proof techniques, a third infinite family of quasi-cyclic codes can be constructed.
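
    The array codes themselves are not specified in this summary, but the cyclicity property they target is easy to illustrate. The following minimal Python sketch (a toy example, not the paper's construction) generates the codewords of the [7,4] binary cyclic code with generator polynomial g(x) = x^3 + x + 1 and checks that every cyclic shift of a codeword is again a codeword.

```python
from itertools import product

n, k = 7, 4
g = [1, 1, 0, 1]  # coefficients of g(x) = 1 + x + x^3, lowest degree first

def encode(msg):
    """Multiply the message polynomial by g(x) over GF(2) (non-systematic encoding)."""
    word = [0] * n
    for i, m in enumerate(msg):
        if m:
            for j, gj in enumerate(g):
                word[i + j] ^= gj
    return tuple(word)

codewords = {encode(msg) for msg in product([0, 1], repeat=k)}

# A code is cyclic iff every cyclic shift of a codeword is again a codeword.
assert all(c[-1:] + c[:-1] in codewords for c in codewords)
print(f"{len(codewords)} codewords; cyclic-shift property holds")
```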

    Harmonic analysis of neural networks

    Neural network models have attracted a lot of interest in recent years, mainly because they were perceived as a new approach to computing. These models can be described as networks in which every node computes a linear threshold function. One of the main difficulties in analyzing the properties of these networks is that they consist of nonlinear elements. I will present a novel approach, based on harmonic analysis of Boolean functions, to analyzing neural networks. In particular, I will show how this technique can be applied to answer the following two fundamental questions: (i) What is the computational power of a polynomial threshold element with respect to linear threshold elements? (ii) Is it possible to get exponentially many spurious memories when the outer-product method is used to program the Hopfield model?
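
    As a concrete illustration of the harmonic-analysis toolkit (an example of mine, not code from the talk), the sketch below computes the Fourier coefficients f_hat(S) = E_x[f(x) * prod_{i in S} x_i] of the 3-input majority function, the simplest linear threshold element, over the {-1, +1} hypercube.

```python
from itertools import product, combinations
from math import prod

n = 3
cube = list(product([-1, 1], repeat=n))  # the Boolean cube {-1, +1}^3

def majority(x):
    """A linear threshold element: the sign of the (unweighted) sum of the inputs."""
    return 1 if sum(x) > 0 else -1

# Fourier coefficient of a subset S: f_hat(S) = E_x[ f(x) * prod_{i in S} x_i ]
for r in range(n + 1):
    for S in combinations(range(n), r):
        f_hat = sum(majority(x) * prod(x[i] for i in S) for x in cube) / len(cube)
        if f_hat:
            print(f"f_hat({set(S)}) = {f_hat:+.2f}")
```

    For 3-input majority the only nonzero coefficients are +1/2 on each singleton and -1/2 on the full set {0, 1, 2}.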

    Some new EC/AUED codes

    A novel construction that differs from the traditional way of constructing systematic EC/AUED (error-correcting/all-unidirectional-error-detecting) codes is presented. The usual method is to take a systematic t-error-correcting code and then append a tail so that the code can detect more than t errors when they are unidirectional. In the authors' construction, the t-error-correcting code is modified in such a way that the weight distribution of the original code is reduced; the authors then only have to add a smaller tail. The resulting codes frequently have less redundancy than the best available systematic t-EC/AUED codes.
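
    To make the traditional baseline concrete, the sketch below (an illustration of the usual method described above, not of the authors' new construction; the helper names are hypothetical) appends a Berger-style tail, the binary count of zeros in the data, to a systematic codeword. Unidirectional errors drive the recomputed count and the stored tail in opposite directions, so any such error pattern is detected.

```python
def berger_tail(data_bits, tail_len):
    """Berger-style check symbol: the number of 0s in the data, written in binary."""
    zeros = data_bits.count(0)
    return [(zeros >> i) & 1 for i in reversed(range(tail_len))]

def encode(data_bits, tail_len=4):
    return data_bits + berger_tail(data_bits, tail_len)

def unidirectional_error_detected(codeword, tail_len=4):
    """Recompute the tail; an all-1->0 (or all-0->1) error pattern always mismatches."""
    data, tail = codeword[:-tail_len], codeword[-tail_len:]
    return berger_tail(data, tail_len) != tail

word = encode([1, 0, 1, 1, 0, 1, 1, 1])
corrupted = word[:]
corrupted[2] = corrupted[7] = 0                  # two 1 -> 0 (unidirectional) errors
print(unidirectional_error_detected(corrupted))  # True
```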

    X-code: MDS array codes with optimal encoding

    We present a new class of MDS (maximum distance separable) array codes of size n×n (n a prime number) called X-code. The X-codes have minimum column distance 3; namely, they can correct either one column error or two column erasures. The key novelty of X-code is its simple geometrical construction, which achieves optimal encoding/update complexity, i.e., a change of any single information bit affects exactly two parity bits. The key idea in our constructions is that all parity symbols are placed in rows rather than columns.
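
    A minimal sketch of the diagonal-parity flavor of the construction follows. The exact X-code parity equations are in the paper, so the index offsets used here are only illustrative, but the demo does show the update optimality claimed above: each information bit lies on exactly one diagonal of each slope, so flipping it changes exactly two parity bits.

```python
import numpy as np

def diagonal_parity_encode(info, n):
    """Stack an (n-2) x n information array on two parity rows, computed by XOR
    along diagonals of slopes +1 and -1 (illustrative offsets; n prime)."""
    arr = np.zeros((n, n), dtype=np.uint8)
    arr[: n - 2, :] = info
    k = np.arange(n - 2)
    for i in range(n):
        arr[n - 2, i] = np.bitwise_xor.reduce(info[k, (i + k + 1) % n])
        arr[n - 1, i] = np.bitwise_xor.reduce(info[k, (i - k - 1) % n])
    return arr

n = 5
rng = np.random.default_rng(0)
info = rng.integers(0, 2, size=(n - 2, n), dtype=np.uint8)
before = diagonal_parity_encode(info, n)
info[1, 3] ^= 1                                   # change a single information bit
after = diagonal_parity_encode(info, n)
print(int((before != after)[n - 2:, :].sum()))    # 2: exactly two parity bits change
```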

    An on-line algorithm for checkpoint placement

    Checkpointing is a common technique for reducing the time to recover from faults in computer systems. By saving intermediate program states in reliable storage, checkpointing reduces the processing time lost to faults. The length of the intervals between checkpoints affects the execution time of programs: long intervals lead to long re-processing time, while too-frequent checkpointing leads to high checkpointing overhead. In this paper we present an on-line algorithm for the placement of checkpoints. The algorithm uses on-line knowledge of the current cost of a checkpoint when deciding whether or not to place one. We show how the execution time of a program using this algorithm can be analyzed. The total execution-time overhead when the proposed algorithm is used is smaller than the overhead when fixed intervals are used. Although the proposed algorithm uses only on-line knowledge about the cost of checkpointing, its behavior is close to that of the off-line optimal algorithm, which uses complete knowledge of the checkpointing cost.
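
    The paper's algorithm and analysis are not reproduced in this summary; the sketch below only conveys the on-line flavor it describes, re-deriving the checkpoint threshold from the currently observed checkpoint cost at every step instead of committing to a fixed interval. Young's classical sqrt(2C/lambda) interval approximation is used as a stand-in decision rule, and the cost function is hypothetical.

```python
import math

def run_with_online_checkpoints(fault_rate, work_units, current_ckpt_cost):
    """Toy loop: before each unit of work, place a checkpoint if the time since the
    last one exceeds a threshold recomputed from the *current* checkpoint cost.
    Uses Young's sqrt(2*C/lambda) rule as an illustration, not the paper's algorithm."""
    elapsed_since_ckpt = 0.0
    checkpoints = []
    for t in range(work_units):
        cost = current_ckpt_cost(t)                     # on-line knowledge of the cost
        threshold = math.sqrt(2.0 * cost / fault_rate)  # interval suggested by that cost
        if elapsed_since_ckpt >= threshold:
            checkpoints.append(t)
            elapsed_since_ckpt = 0.0
        elapsed_since_ckpt += 1.0
    return checkpoints

# Hypothetical time-varying checkpoint cost: cheap for 25 steps, expensive for 25.
print(run_with_online_checkpoints(
    fault_rate=0.001, work_units=200,
    current_ckpt_cost=lambda t: 5.0 if t % 50 < 25 else 20.0))
```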

    Rewriting Codes for Joint Information Storage in Flash Memories

    Memories whose storage cells transit irreversibly between states have been common since the start of data storage technology. In recent years, flash memories have become a very important family of such memories. A flash memory cell has q states, 0, 1, ..., q-1, and can only transit from a lower state to a higher state before the expensive erasure operation takes place. We study rewriting codes that enable the data stored in a group of cells to be rewritten by only shifting the cells to higher states. Since the considered state transitions are irreversible, the number of rewrites is bounded. Our objective is to maximize the number of times the data can be rewritten. We focus on the joint storage of data in flash memories and study two rewriting codes for two different scenarios. The first code, called a floating code, is for the joint storage of multiple variables, where every rewrite changes one variable. The second code, called a buffer code, is for remembering the most recent data in a data stream. Many of the codes presented here are either optimal or asymptotically optimal. We also present bounds on the performance of general codes. The results show that rewriting codes can integrate a flash memory's rewriting capabilities for different variables to a high degree.
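
    The floating and buffer codes themselves are defined in the paper; as a baseline for the cell model, the toy code below (a minimal example of mine, not one of the paper's constructions) stores a single binary variable as the parity of one q-level cell. Each rewrite raises the level by one, so the cell supports q-1 rewrites before an erasure is needed; the paper's codes do much better by storing several variables jointly across many cells.

```python
class SingleCellParityCode:
    """Toy rewriting code: one q-level cell stores one bit as (level mod 2).
    Levels only move up, so at most q-1 rewrites fit before an erasure."""

    def __init__(self, q):
        self.q = q
        self.level = 0

    def read(self):
        return self.level % 2

    def rewrite(self, bit):
        if bit == self.read():
            return True        # value unchanged, no level spent
        if self.level + 1 >= self.q:
            return False       # cell exhausted; an erasure would be needed
        self.level += 1        # irreversible: the level only increases
        return True

cell = SingleCellParityCode(q=4)
for b in [1, 0, 1, 0]:
    print(b, cell.rewrite(b), cell.level)   # the fourth rewrite fails: only q-1 = 3 fit
```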

    Miscorrection probability beyond the minimum distance

    The miscorrection probability of a list decoder is the probability that the decoder will have at least one non-causal codeword in its decoding sphere. Evaluating this probability is important when a list decoder is used as a conventional decoder, since in that case we require the list to contain at most one codeword for most error patterns. The main result is a lower bound on the miscorrection probability. The key ingredient in the proof is a new combinatorial upper bound on the list size for a general q-ary block code. This bound is tighter than the best known bound for large alphabets, and it is shown to be very close to the algebraic bound for Reed-Solomon codes. Finally, we discuss two known upper bounds on the miscorrection probability and unify them for linear MDS codes.
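
    The paper's bounds are not reproduced here, but the crude random-coding quantity they sharpen is easy to evaluate: for M codewords of length n over a q-ary alphabet, the expected number of other codewords inside a radius-t Hamming sphere is about (M-1) * V_q(n,t) / q^n, where V_q(n,t) is the sphere volume. The sketch below computes that estimate; the parameters are an illustrative Reed-Solomon-like example, not figures from the paper.

```python
from math import comb

def sphere_volume(n, t, q):
    """Number of q-ary words within Hamming distance t of a fixed word."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))

def expected_miscorrections(n, k, t, q):
    """Random-coding estimate of the expected number of non-causal codewords
    in a radius-t decoding sphere: (q^k - 1) * V_q(n, t) / q^n."""
    return (q ** k - 1) * sphere_volume(n, t, q) / q ** n

# Illustrative [255, 239] code over GF(256), decoded to its classical radius t = 8.
print(expected_miscorrections(n=255, k=239, t=8, q=256))
```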

    Delay-insensitive pipelined communication on parallel buses

    Consider a communication channel that consists of several subchannels transmitting simultaneously and asynchronously. As an example of this scheme, consider a board with several chips, where the subchannels are wires connecting the chips and differences in wire lengths may result in asynchronous reception. In current technology, the receiver acknowledges reception of a message before the transmitter sends the following message; hence, pipelined utilization of the channel is not possible. Our main contribution is a scheme that enables transmission without an acknowledgment of the message, thereby enabling pipelined communication and providing higher bandwidth. However, our scheme allows a certain number of transitions from a second message to arrive before reception of the current message has been completed, a condition that we call skew. We have derived necessary and sufficient conditions for codes that can tolerate a certain amount of skew among adjacent messages (therefore allowing continuous operation) and detect a larger amount of skew when the tolerated amount is exceeded. These results generalize previously known results. We have constructed codes that satisfy the necessary and sufficient conditions, studied their optimality, and devised efficient decoding algorithms. To the best of our knowledge, this is the first known scheme that permits efficient asynchronous communication without acknowledgment. Potential applications are in on-chip, on-board, and board-to-board communications, enabling much higher communication bandwidth.
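
    The skew-tolerant codes are constructed in the paper; the sketch below only illustrates the classical acknowledgment-based baseline they generalize, where each message is a constant-weight (m-out-of-n) codeword, so the receiver can tell purely from counting wire transitions when a message has arrived in full, regardless of the order in which the transitions arrive.

```python
class MOutOfNReceiver:
    """Baseline delay-insensitive receiver: a message is one transition on each of
    exactly m of the n wires, in any order. Without skew tolerance, an acknowledgment
    is needed before the next message may be sent, which is what the paper removes."""

    def __init__(self, n, m):
        self.n, self.m = n, m
        self.arrived = set()

    def transition(self, wire):
        self.arrived.add(wire)
        if len(self.arrived) == self.m:
            message = frozenset(self.arrived)
            self.arrived.clear()          # ready for the next message
            return message                # message complete
        return None                       # still waiting for more transitions

rx = MOutOfNReceiver(n=4, m=2)
for wire in [2, 0, 3, 1]:                 # transitions arrive in arbitrary order
    out = rx.transition(wire)
    if out is not None:
        print(sorted(out))                # [0, 2] then [1, 3]
```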

    Networks of Relations for Representation, Learning, and Generalization

    We propose representing knowledge as a network of relations. Each relation relates only a few continuous or discrete variables, so that any overall relationship among the many variables treated by the network winds up being distributed throughout the network. Each relation encodes which combinations of values correspond to past experience for the variables related by the relation. Variables may or may not correspond to understandable aspects of the situation being modeled by the network. A distributed calculational process can be used to access the information stored in such a network, allowing the network to function as an associative memory. This process in its simplest form is purely inhibitory, narrowing down the space of possibilities as much as possible given the data to be matched. In contrast with methods that always retrieve a best fit for all variables, this method can return values for inferred variables while leaving non-inferable variables in an unknown or partially known state. In contrast with belief propagation methods, this method can be proven to converge quickly and uniformly for any network topology, allowing networks to be as interconnected as the relationships warrant, with no independence assumptions required. The generalization properties of such a memory are aligned with the network's relational representation of how the various aspects of the modeled situation are related.
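
    As a rough illustration of the purely inhibitory retrieval process described above (a generic constraint-propagation sketch, not the authors' exact algorithm), the code below repeatedly prunes each variable's set of possible values to those supported by some tuple of every relation consistent with the current sets. It converges to a fixed point, and inferable variables end up with a single value while non-inferable ones keep several.

```python
def propagate(domains, relations):
    """domains: dict var -> set of still-possible values.
    relations: list of (vars, allowed_tuples) pairs.
    Inhibition only: values are removed when no consistent tuple supports them."""
    changed = True
    while changed:
        changed = False
        for vars_, tuples_ in relations:
            consistent = [t for t in tuples_
                          if all(v in domains[x] for x, v in zip(vars_, t))]
            for idx, x in enumerate(vars_):
                supported = {t[idx] for t in consistent}
                if not domains[x] <= supported:
                    domains[x] &= supported
                    changed = True
    return domains

# Hypothetical example: two relations over three binary variables, A != B and B == C.
relations = [(("A", "B"), {(0, 1), (1, 0)}),
             (("B", "C"), {(0, 0), (1, 1)})]
domains = {"A": {0}, "B": {0, 1}, "C": {0, 1}}    # clamp A = 0 as the data to match
print(propagate(domains, relations))              # B and C are narrowed down to 1
```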